User Manual: FPC_AS, a MATLAB Solver for l1-Regularized Least Squares Problems
Abstract
FPC_AS is a MATLAB solver for the l1-regularized least squares problem

min_x  μ‖x‖_1 + (1/2)‖Ax − b‖_M^2,

where x ∈ R^n, A ∈ R^{m×n}, b ∈ R^m, M ∈ R^{m×m} is a given weighting matrix, and μ > 0 is the regularization parameter. It is based on the active-set algorithm with a continuation strategy described in [1, 2]. FPC_AS is the successor of FPC [3]. While FPC_AS still performs shrinkage iterations and continuation like its predecessor, most of the code has been rewritten. Compared to FPC, which performs well on large-scale problems with highly sparse solutions, FPC_AS works better overall and much better on certain difficult problems arising in compressed sensing, for example those whose solutions are sparse but not highly sparse, and those whose solutions contain both very large and very small nonzero components (i.e., solutions with a huge dynamic range). In such solutions, certain nonzero components are difficult to identify because they are either too small or offer only a slight advantage over others in representing b. FPC_AS was designed with active-set identification and subspace optimization to help recover these components.
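The shrinkage-plus-continuation idea described above can be illustrated with a short sketch. This is a minimal Python/NumPy analogue with M = I, not the actual FPC_AS implementation; the function names and the 4x weight-reduction schedule are illustrative choices of mine:

```python
import numpy as np

def soft_threshold(z, t):
    # componentwise shrinkage: sign(z) * max(|z| - t, 0)
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def shrinkage_with_continuation(A, b, mu, n_stages=4, iters=200):
    """Iterative shrinkage (ISTA) run for a decreasing sequence of
    regularization weights ending at mu -- a toy analogue of the
    continuation strategy, solving min_x mu*||x||_1 + 0.5*||Ax - b||^2."""
    n = A.shape[1]
    x = np.zeros(n)
    # step size 1/L with L = ||A||_2^2 guarantees monotone descent
    tau = 1.0 / np.linalg.norm(A, 2) ** 2
    # continuation: start with a large weight, reduce it toward mu
    for mu_k in [mu * 4.0 ** k for k in range(n_stages - 1, -1, -1)]:
        for _ in range(iters):
            grad = A.T @ (A @ x - b)
            x = soft_threshold(x - tau * grad, tau * mu_k)
    return x
```

Large intermediate weights keep the iterates very sparse early on, which is what makes continuation effective on problems with large dynamic ranges; the real solver additionally identifies an active set and solves a smooth subproblem on it.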
Similar resources
The paper presents a Graphical User Interface (GUI) for nonlinear programming in Matlab. The GUI gives easy access to all features in the NLPLIB TB (NonLinear Programming LIBrary Toolbox): a set of Matlab solvers, test problems, graphical and computational utilities for unconstrained and constrained optimization, quadratic programming, unconstrained and constrained nonlinear least squares, box-...
An Efficient Method for Large-Scale l1-Regularized Convex Loss Minimization
Convex loss minimization with l1 regularization has been proposed as a promising method for feature selection in classification (e.g., l1-regularized logistic regression) and regression (e.g., l1-regularized least squares). In this paper we describe an efficient interior-point method for solving large-scale l1-regularized convex loss minimization problems that uses a preconditioned conjugate gr...
Scalable Matrix-valued Kernel Learning and High-dimensional Nonlinear Causal Inference
We propose a general matrix-valued multiple kernel learning framework for high-dimensional nonlinear multivariate regression problems. This framework allows a broad class of mixed norm regularizers, including those that induce sparsity, to be imposed on a dictionary of vector-valued Reproducing Kernel Hilbert Spaces [19]. We develop a highly scalable and eigendecomposition-free Block coordinate ...
An Interior-Point Method for Large-Scale l1-Regularized Least Squares
Recently, a lot of attention has been paid to l1 regularization based methods for sparse signal reconstruction (e.g., basis pursuit denoising and compressed sensing) and feature selection (e.g., the Lasso algorithm) in signal processing, statistics, and related fields. These problems can be cast as l1-regularized least squares programs (LSPs), which can be reformulated as convex quadratic progr...
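The reformulation of an l1-regularized least squares program as a smooth bound-constrained problem can be sketched via the standard splitting x = u − v with u, v ≥ 0. This is a hedged illustration solved here with SciPy's L-BFGS-B rather than the interior-point method the abstract refers to; the function name is mine:

```python
import numpy as np
from scipy.optimize import minimize

def l1ls_as_bound_constrained(A, b, mu):
    """Recast min_x mu*||x||_1 + 0.5*||Ax - b||^2 as a smooth program
    over z = (u, v) with u, v >= 0 and x = u - v, since
    ||x||_1 = sum(u) + sum(v) at any optimum."""
    n = A.shape[1]

    def fun(z):
        u, v = z[:n], z[n:]
        r = A @ (u - v) - b
        f = mu * z.sum() + 0.5 * (r @ r)
        g = A.T @ r
        # gradient w.r.t. u is mu + A^T r, w.r.t. v is mu - A^T r
        return f, np.concatenate([mu + g, mu - g])

    res = minimize(fun, np.zeros(2 * n), jac=True, method="L-BFGS-B",
                   bounds=[(0.0, None)] * (2 * n))
    z = res.x
    return z[:n] - z[n:]
```

The splitting doubles the number of variables but removes the nondifferentiable l1 term, which is what makes second-order and quasi-Newton methods applicable.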
Approximate L0 constrained non-negative matrix and tensor factorization
Non-negative matrix factorization (NMF), i.e. V ≈ WH where V, W, and H are all non-negative, has become a widely used blind source separation technique due to its part-based representation. The NMF decomposition is not in general unique, and a part-based representation is not guaranteed. However, imposing sparseness both improves the uniqueness of the decomposition and favors part-based representati...
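Sparseness-promoting NMF can be sketched with multiplicative updates plus an l1 penalty on H. This is an illustrative surrogate for the approximate L0 constraint the abstract discusses, not the paper's algorithm; the function name and the penalty weight `lam` are my own:

```python
import numpy as np

def nmf_sparse(V, r, lam=0.01, iters=300, eps=1e-9):
    """Lee-Seung style multiplicative updates for V ~ W @ H with an
    l1 penalty on H that pushes small activations toward zero.
    All updates preserve non-negativity because every factor is >= 0."""
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        # H update: the extra lam in the denominator shrinks H entrywise
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)
        # W update: plain least-squares multiplicative step
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Note the scaling ambiguity of NMF (scaling W up and H down leaves WH unchanged) means an l1 penalty on H alone is only meaningful with a small weight or with normalized columns of W; published sparse-NMF variants handle this explicitly.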
Publication year: 2008